To Evaluate the Effect of Papaya-Tankan Kshar Sutra in Recurrent Pilonidal Sinus
Shalya Tantra is a branch of Ayurveda that deals with surgical as well as parasurgical procedures such as Kshar karma, Agnikarma, and Raktamokshana. This study presents a case report of a recurrent Pilonidal sinus treated with Papaya-Tankan Kshar Sutra, which resolved the symptoms and cured the condition. The incidence of Pilonidal sinus is approximately 26 per 100,000; it is a benign disease that occurs in young adults in the age group of 15-30 years, after puberty, when sex hormones are known to affect the pilosebaceous glands and change healthy body hair growth. The etiology and pathogenesis of Pilonidal sinus are not clear, although the disease is thought to be related to the accumulation of weak and lifeless hair in the intergluteal region. Over time, a foreign-body reaction occurs, causing abscess and sinus formation. Pilonidal sinus can be correlated with Nadivrana in Ayurveda. Acharya Sushruta, the father of surgery, first described Nadivrana in detail, including its etiological factors, classifications, symptomatology, pathology, complications, and management, in a most scientific manner.
Understanding the Impact of Early Citers on Long-Term Scientific Impact
This paper explores an interesting new dimension to the challenging problem
of predicting long-term scientific impact (LTSI), usually measured by the number
of citations accumulated by a paper in the long term. It is well known that
early citations (within 1-2 years after publication) acquired by a paper
positively affect its LTSI. However, no prior work investigates whether the
set of authors who bring in these early citations to a paper also affects its
LTSI. In this paper, we demonstrate, for the first time, the impact of these
authors, whom we call early citers (EC), on the LTSI of a paper. Note that this
study of the complex dynamics of EC introduces a new paradigm in citation
behavior analysis. Using a massive computer science bibliographic dataset, we
identify two distinct categories of EC: authors with a high overall
publication/citation count in the dataset are called influential, and the rest
non-influential. We investigate three characteristic
properties of EC and present an extensive analysis of how each category
correlates with LTSI in terms of these properties. In contrast to popular
perception, we find that influential EC negatively affect LTSI, possibly owing
to attention stealing. To illustrate this, we present several representative
examples from the dataset. A closer inspection of the collaboration network
reveals that this stealing effect is more profound if an EC is nearer to the
authors of the paper being investigated. As an intuitive use case, we show that
incorporating EC properties into state-of-the-art supervised citation
prediction models yields substantial performance gains. Finally, we present
an online portal to visualize EC statistics along with the prediction results
for a given query paper.
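The influential vs. non-influential split described above can be sketched as follows. This is a minimal illustration: the record format, author names, and the citation-count threshold are all hypothetical, not the paper's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Author:
    name: str
    citation_count: int  # author's overall citation count in the dataset

def classify_early_citers(early_citers, influence_threshold=1000):
    """Split the early citers of a paper (authors who cite it within
    1-2 years of publication) into the two EC categories: authors with
    a high overall citation count are 'influential', the rest
    'non-influential'. The threshold value is a made-up example."""
    influential = [a for a in early_citers if a.citation_count >= influence_threshold]
    non_influential = [a for a in early_citers if a.citation_count < influence_threshold]
    return influential, non_influential

# toy early-citer records for one query paper
ecs = [Author("A", 5200), Author("B", 40), Author("C", 1500)]
influential, non_influential = classify_early_citers(ecs)
```

Counts and properties of the two resulting groups could then be fed as extra features into a supervised citation prediction model.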
Incremental and Decremental Nonparametric Discriminant Analysis for Face Recognition
Nonparametric Discriminant Analysis (NDA) possesses inherent advantages over Linear Discriminant Analysis (LDA), such as capturing the boundary structure of samples and avoiding matrix inversion. In this paper, we present a novel method for constructing an updated NDA model for face recognition. The proposed method is applicable to scenarios where bursts of data samples are added to the existing model in random chunks. Likewise, samples that degrade the performance of the model may need to be removed. For these two problems, we propose incremental NDA (INDA) and decremental NDA (DNDA), respectively. Experimental results on four publicly available datasets, viz. AR, PIE, ORL, and Yale, show the efficacy of the proposed method. The proposed method also requires less computation time than batch NDA, which makes it suitable for real-time applications.
The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
Large pre-trained transformers are the show-stealers of modern-day deep learning,
and it becomes crucial to comprehend the parsimonious patterns that exist
within them as they grow in scale. With exploding parameter counts, the
Lottery Ticket Hypothesis (LTH) and its variants have lost their pragmatism
in sparsifying such models due to the high computation and memory bottlenecks
of the repetitive train-prune-retrain routine of iterative magnitude pruning
(IMP), which worsen with increasing model size. In this paper, we comprehensively
study induced sparse patterns across multiple large pre-trained vision and
language transformers. We propose the existence of essential sparsity,
defined by a sharp dropping point in the sparsity-performance curve beyond
which performance declines much faster as the sparsity level rises, when we
directly remove the smallest-magnitude weights in one shot. We also
present an intriguing emerging phenomenon of abrupt sparsification during the
pre-training of BERT, i.e., BERT suddenly becomes heavily sparse in
pre-training after certain iterations. Moreover, our observations indicate
a counter-intuitive finding that BERT trained with a larger amount of
pre-training data tends to have a better ability to condense knowledge in
comparatively fewer parameters. Lastly, we investigate the effect of
the pre-training loss on essential sparsity and discover that self-supervised
learning (SSL) objectives trigger stronger emergent sparsification properties
than supervised learning (SL). Our codes are available at
\url{https://github.com/VITA-Group/essential_sparsity}
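The one-shot magnitude pruning that defines essential sparsity can be sketched as below. This is a minimal NumPy illustration on random arrays, not the paper's transformer experiments; locating the dropping point would additionally require evaluating the pruned model at each sparsity level and watching for the sharp performance drop.

```python
import numpy as np

def one_shot_magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of entries,
    globally across all weight arrays, in a single shot (no retraining)."""
    flat = np.concatenate([w.ravel() for w in weights])
    k = int(sparsity * flat.size)
    if k == 0:
        return [w.copy() for w in weights]
    # k-th smallest absolute value becomes the global pruning threshold
    threshold = np.partition(np.abs(flat), k - 1)[k - 1]
    return [np.where(np.abs(w) <= threshold, 0.0, w) for w in weights]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 4)), rng.standard_normal(8)]
pruned = one_shot_magnitude_prune(weights, 0.5)
achieved = sum(int((w == 0).sum()) for w in pruned) / 24  # fraction zeroed
```

Sweeping `sparsity` over, say, 0.1 to 0.9 and scoring the pruned model at each level would trace the sparsity-performance curve on which the dropping point is defined.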
Convection-Enhanced Delivery of Antiangiogenic Drugs and Liposomal Cytotoxic Drugs to Heterogeneous Brain Tumor for Combination Therapy
Acknowledgments The authors thank RK Gupta for providing the clinical DCE-MRI data of human brain tumors. Funding Ajay Bhandari would like to acknowledge the support received by a grant from the Science and Engineering Research Board (Grant Number: SRG/2021/000053) and the Indian Institute of Technology (Indian School of Mines), Dhanbad (Grant Number: FRS (147)/2020-2021/MECH). Wenbo Zhan would like to acknowledge the support received from Children with Cancer UK under the project Children's Brain Tumor Drug Delivery Consortium (Grant Number: 16-224). Both authors would like to acknowledge the support received from the Royal Society (Grant Number: IES\R1\221015).
Physics-Driven Turbulence Image Restoration with Stochastic Refinement
Image distortion by atmospheric turbulence is a stochastic degradation, which
is a critical problem in long-range optical imaging systems. A substantial
body of research has been conducted over the past decades, including
model-based and emerging deep-learning solutions with the help of synthetic
data. Although fast and physics-grounded simulation tools have recently been
introduced to help deep-learning models adapt to real-world turbulence
conditions, the training of such models relies only on synthetic data and
ground-truth pairs. This paper proposes the Physics-integrated Restoration Network (PiRN) to
bring the physics-based simulator directly into the training process to help
the network disentangle the stochasticity from the degradation and the
underlying image. Furthermore, to overcome the ``average effect'' introduced
by deterministic models and the domain gap between synthetic and real-world
degradation, we introduce PiRN with Stochastic Refinement (PiRN-SR) to
boost its perceptual quality. Overall, our PiRN and PiRN-SR improve the
generalization to real-world unknown turbulence conditions and provide a
state-of-the-art restoration in both pixel-wise accuracy and perceptual
quality. Our codes are available at \url{https://github.com/VITA-Group/PiRN}.
Comment: Accepted by ICCV 2023
Single Frame Atmospheric Turbulence Mitigation: A Benchmark Study and A New Physics-Inspired Transformer Model
Image restoration algorithms for atmospheric turbulence are known to be much
more challenging to design than traditional ones such as blur or noise because
the distortion caused by the turbulence is an entanglement of spatially varying
blur, geometric distortion, and sensor noise. Existing CNN-based restoration
methods built upon convolutional kernels with static weights are insufficient
to handle the spatially dynamical atmospheric turbulence effect. To address
this problem, in this paper, we propose a physics-inspired transformer model
for imaging through atmospheric turbulence. The proposed network utilizes the
power of transformer blocks to jointly extract a dynamical turbulence
distortion map and restore a turbulence-free image. In addition, recognizing
the lack of a comprehensive dataset, we collect and present two new real-world
turbulence datasets that allow for evaluation with both classical objective
metrics (e.g., PSNR and SSIM) and a new task-driven metric using text
recognition accuracy. Both real testing sets and all related code will be made
publicly available.
Comment: This paper was accepted as a poster at ECCV 2022
Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication
Graphs are omnipresent and GNNs are a powerful family of neural networks for
learning over graphs. Despite their popularity, scaling GNNs either by
deepening or widening suffers from the prevalent issues of unhealthy gradients,
over-smoothing, and information squashing, which often lead to sub-standard
performance. In this work, we are interested in exploring a principled way to
scale GNN capacity without deepening or widening, which can improve
performance across multiple small and large graphs. Motivated by the recent
intriguing phenomenon of model soups, which suggests that fine-tuned weights of
multiple large pre-trained language models can be merged into a better minimum, we
argue to exploit the fundamentals of model soups to mitigate the aforementioned
issues of memory bottleneck and trainability during GNN scaling. More
specifically, we propose not to deepen or widen current GNNs, but instead
present a data-centric perspective of model soups tailored to building
powerful GNNs. By partitioning giant graph data, we build multiple
independently and parallelly trained weaker GNNs (soup ingredient) without any
intermediate communication, and combine their strength using a greedy
interpolation soup procedure to achieve state-of-the-art performance. Compared
to concurrent distributed GNN training works such as Jiong et al. 2023, we
train each soup ingredient by sampling different subgraphs per epoch and their
respective sub-models are merged only after being fully trained (rather than
intermediately so). Moreover, we provide a wide variety of model soup
preparation techniques by leveraging state-of-the-art graph sampling and graph
partitioning approaches that can handle large graphs. Codes are available at:
\url{https://github.com/VITA-Group/graph_ladling}.
Comment: Accepted at ICML 2023. Includes a comparison with a concurrent work
(Jiong et al. 2023) that independently presents similar ideas, among other
SOTA distributed GNN training works
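The greedy interpolation soup procedure can be sketched as follows. This is a toy NumPy illustration on flattened weight vectors with a synthetic validation score; the ordering and acceptance rule shown here are assumptions in the spirit of greedy model soups, not the exact procedure from the paper.

```python
import numpy as np

def greedy_interpolation_soup(ingredients, evaluate):
    """Greedily merge independently trained model weights ('soup
    ingredients'): visit ingredients in decreasing score order and keep
    an interpolation only when it does not hurt the validation score.
    `evaluate` maps a weight vector to a metric (higher is better)."""
    order = sorted(ingredients, key=evaluate, reverse=True)
    soup, n = order[0], 1
    best = evaluate(soup)
    for w in order[1:]:
        candidate = (soup * n + w) / (n + 1)  # uniform interpolation
        score = evaluate(candidate)
        if score >= best:  # accept only non-degrading ingredients
            soup, n, best = candidate, n + 1, score
    return soup

# toy check: the synthetic score peaks at the all-ones weight vector
target = np.ones(4)
evaluate = lambda w: -np.linalg.norm(w - target)
ingredients = [np.array([1.2, 0.8, 1.0, 1.1]),
               np.array([0.9, 1.1, 1.0, 0.9]),
               np.array([5.0, 5.0, 5.0, 5.0])]  # a bad ingredient
soup = greedy_interpolation_soup(ingredients, evaluate)
```

Here the two good ingredients are averaged while the bad one is rejected, so the soup scores at least as well as any single ingredient on the synthetic metric.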